Search for: All records

Creators/Authors contains: "Jensen, Paul A"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Background: The rapid advancement of artificial intelligence (AI) is reshaping industrial workflows and workforce expectations. After its breakthrough year in 2023, AI has become ubiquitous, yet no standardized approach exists for integrating AI into engineering and computer science undergraduate curricula. Recent graduates find themselves navigating evolving industry demands surrounding AI, often without formal preparation, and the ways in which AI shapes their career decisions offer a critical perspective for supporting future students as they enter AI-friendly industries. Our work uses social cognitive career theory (SCCT) to qualitatively investigate how 14 recent engineering graduates working across a variety of industry sectors perceived the impact of AI on their careers and industries. Results: Given the rapid and ongoing evolution of AI, findings suggested that SCCT may have limited applicability until AI technology matures further. Many recent graduates lacked prior exposure to, or a clear understanding of, AI and its relevance to their professional roles. The timing of direct, practical exposure to AI emerged as a key influence on how participants perceived AI's impact on their career decisions. Participants emphasized a need for more customizable undergraduate curricula that align with industry trends and individual interests related to AI. While many acknowledged AI's potential to enhance efficiency in data management and routine administrative tasks, they largely did not perceive AI as a direct threat to their core engineering functions; instead, AI was viewed as a supplemental tool requiring critical oversight. Despite interest in AI's potential, most participants lacked the time or resources to pursue integrating AI into their professional roles on their own. Broader concerns included ethical considerations, industry regulations, and the rapid pace of AI development. Conclusions: This exploratory work highlights an urgent need for collaboration between higher education and industry leaders to integrate direct, hands-on experience with AI into engineering education more effectively. A personalized, context-driven approach to teaching AI, one that emphasizes ethical considerations and domain-specific applications, would better prepare students for evolving workforce expectations by highlighting AI's relevance and limitations. This alignment would support more meaningful engagement with AI and empower future engineers to apply it responsibly and effectively in their fields.
  2. The National Science Foundation Research Initiation in Engineering Formation (RIEF) program aims to increase research capacity in the field by funding technical engineering faculty to learn to conduct engineering education research through mentorship by an experienced social science researcher. We use collaborative autoethnography to study the tripartite RIEF mentoring relationship between Julie, an experienced engineering education researcher, and two novice education researchers with backgrounds in biomedical engineering: Paul, a biomedical engineering faculty member, and Deepthi, a graduate student whose major professor is Paul. We ground our work in the cognitive apprenticeship model and Eby and colleagues' mentoring model. Results: Using data from written reflections and interviews, we explored the role of instrumental and psychosocial supports in our mentoring relationship. In particular, we noted how elements of cognitive apprenticeship, such as scaffolding and the gradual fading of instrumental supports, helped Paul and Deepthi learn qualitative research skills that differed drastically from their biomedical engineering research expertise. We initially conceptualized our tripartite relationship as one in which Julie mentored Paul and Paul subsequently mentored Deepthi. Ultimately, we realized that this model was unrealistic because Paul did not yet possess the social science research expertise to mentor another novice. As a result, we changed our model so that Julie mentored both Paul and Deepthi directly. While our mentoring relationship was very positive overall, it included many moments of miscommunication and misunderstanding. We draw on Lent and Lopez's idea of relation-inferred self-efficacy to explain some of these missed opportunities for communication and understanding. Conclusions: This paper contributes to the literature on engineering education capacity building by studying mentoring as a mechanism to support technically trained researchers in learning to conduct engineering education research. Our initial mentoring model failed to account for how challenging it is for mentees to make the paradigm shift from technical engineering to social science research, and how that shift would affect Paul's ability to mentor Deepthi. Our experiences have implications for expanding research capacity because they raise practical and conceptual issues for experienced and novice engineering education researchers to consider as they form mentoring relationships.
  3. Background: Generative artificial intelligence (AI) large language models (LLMs) have significant potential as research tools. However, the broader implications of using these tools are still emerging, and few studies have explored using LLMs to generate data for qualitative engineering education research. Purpose/Hypothesis: We explore the following questions: (i) What are the affordances and limitations of using LLMs to generate qualitative data in engineering education, and (ii) in what ways might these data reproduce and reinforce dominant cultural narratives in engineering education, including narratives of high stress? Design/Methods: We analyzed similarities and differences between LLM-generated conversational data (ChatGPT) and qualitative interviews with engineering faculty and undergraduate engineering students from multiple institutions. We identified patterns, affordances, limitations, and underlying biases in the generated data. Results: LLM-generated content contained responses similar to the interview content. Varying the prompt persona (e.g., demographic information) increased the variety of responses (a minimal sketch of this persona-variation approach follows this list). When prompted for ways to decrease stress in engineering education, LLM responses more readily described opportunities for structural change, while participants' responses more often described personal changes. The LLM stereotyped responses more frequently than participants did, meaning that LLM responses lacked the nuance and variation that naturally occur in interviews. Conclusions: LLMs may be a useful tool in brainstorming, for example, during protocol development and refinement. However, the bias present in the data indicates that care must be taken when engaging with LLMs to generate data. Specially trained LLMs based only on data from engineering education hold promise for future research.
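The persona-variation technique described in item 3 can be illustrated with a short, hypothetical sketch. It assumes the openai Python client (v1+); the persona descriptions, interview question, and model name below are placeholders for illustration, not the study's actual protocol, which the abstract does not specify.

    # Minimal sketch: generating persona-varied synthetic interview responses.
    # Assumes the openai>=1.0 Python client and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical personas; the study varied demographic details in its prompts.
    personas = [
        "a first-year undergraduate engineering student at a large public university",
        "a tenured engineering professor at a small private college",
    ]

    # Illustrative question echoing the study's theme of stress in engineering education.
    question = ("What sources of stress do you experience in engineering education, "
                "and what changes might reduce them?")

    for persona in personas:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"You are {persona}. Answer interview questions in the first person."},
                {"role": "user", "content": question},
            ],
        )
        # Each persona yields a separate synthetic response that can be compared
        # against transcripts from human interviews.
        print(f"--- {persona} ---")
        print(response.choices[0].message.content)

Responses generated this way could then be coded with the same qualitative protocol applied to human transcripts, which is how differences such as stereotyped or less nuanced answers would surface.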